Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
[Ed's Comment - From Wikipedia, the free encyclopedia:
The French HADOPI law (French: Haute Autorité pour la Diffusion des Œuvres et la Protection des droits d'auteur sur Internet,[1][a] English: "Supreme Authority for the Distribution of Works and Protection of Copyright on the Internet") or Creation and Internet law (French: la loi Création et Internet) was introduced during 2009, providing what is known as a graduated response as a means to encourage compliance with copyright laws. HADOPI is the acronym of the government agency created to administer it.
Comment Ends --JR]
Today, the Conseil d’État (the French Administrative Supreme Court) ruled [PDF in French -Ed] in favor of La Quadrature du Net, French Data Network (FDN), Franciliens.net and Fédération FDN [sites in French -Ed]. It recognised that Hadopi's surveillance system (operated by Arcom since 2021) is a breach of fundamental rights protected by the European Union. As a result, it has ordered the government to repeal the core provisions of Hadopi's key decree, which organises the "graduated response" system. This fight against Hadopi, in which La Quadrature has been involved since the first legislative debates in the National Assembly in 2009, is emblematic of the archaic view held by successive governments, both left-wing and right-wing, on the question of sharing online culture and knowledge. It is now up to the government to acknowledge the death of Hadopi and, instead of attempting to bring it back to life, to finally admit that online cultural sharing for non-commercial purposes must not be criminalised.
La Quadrature du Net started its challenge in court back in 2009, arguing that the law was incompatible with European Union law and human rights. The law was named after the French copyright authority (HADOPI).
Previously:
(2026) France Keeps Breaking the Internet to Stop Piracy, Even Though It's Not Working
(2021) France Gets a New Anti-Piracy Agency in 2022
Apple has agreed to pay $250 million to settle a class action lawsuit that accused it of misleading customers about the availability of its Apple Intelligence features. The proposed settlement would apply to people in the US who purchased any model of the iPhone 16 or the iPhone 15 Pro between June 10th, 2024 and March 29th, 2025.
People who submit qualifying claims can receive $25 for each eligible device, "which may decrease or increase up to $95 per device, depending on claim volume and other factors," according to Clarkson Law Firm, the legal team behind the class action lawsuit.
The settlement will resolve a 2025 lawsuit, alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."
In a statement to The Verge, Apple spokesperson Marni Goldberg said the company "resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users." You can read Apple's full statement at the bottom of this article.
Apple previewed a series of AI-powered features coming to its iPhones during its June 2024 Worldwide Developers Conference, including a more personalized Siri. But when the iPhone 16 launched in September, Apple labeled it as "built for Apple Intelligence," even though it lacked many of the capabilities teased months earlier.
Instead, Apple gradually rolled out its new AI features, including Image Playground, Genmoji, and a ChatGPT integration in Siri. The company also delayed the launch of its more personalized Siri, which is now expected to arrive later this year.
Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for the Apple Intelligence page on its website. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Apple denied any wrongdoing. Here's the company's full statement:
Since the launch of Apple Intelligence, we have introduced dozens of features across many languages that are integrated across Apple's platforms, relevant to what users do every day, and built with privacy protections at every step. These include Visual Intelligence, Live Translation, Writing Tools, Genmoji, Clean Up and many more.
Apple has reached a settlement to resolve claims related to the availability of two additional features. We resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users.
Ah, nostalgia. The taste of Mum's secret-sauce pasta, the endless summers, that one time Fat Nadya was going to show her boobs in the bushes behind Ms Wolowitz's house ... and soon, dear reader, the indescribable pleasure of wasting time selecting cars, fire hydrants, traffic lights and the like for the fourteenth time just to read or buy something online.
For Google has declared that the Olden Ways are over, as these are agentic times, and it is necessary to let your computer do the routine stuff for you, like booking a month-long cruise in the Caribbean or something. So, no more old captcha: it's ReCaptcha Version II now, and you, yes you, will from now on be obligated to prove you're not (another) machine by taking a picture with your smartphone (machine) which, of course, must be authenticated itself to the Google Machine, to prove you're not a, you guessed it, machine. (Oblig funny monkey clip here [Video not reviewed. -Ed])
Somehow I get the feeling that the only purpose of a human in the not so distant future will be to sign off (minute 21 and beyond) for a machine, and pay its bills.
I guess that's called winning -- by the machines.
https://archive.ph/TCsXg (Actually a NYT article)
There is a moment when internet companies get the stink of death on them. For AOL, it was 2003, when it became clear that its users were abandoning its clunky dial-up internet service for far-faster broadband. For Yahoo, it was 2015, when its last-ditch acquisition spree failed and it sold itself to Verizon.
For Meta, that time is now. I believe the company — one of the most powerful media organizations in the world and one of the most valuable members of the S&P 500 — is at the start of a long, slow decline that will trigger aftershocks to our economy and our society.
It may be named Meta, but the company's biggest asset is still Facebook. Started from a Harvard dorm, the original online social network has dominated our world for two decades. Its three billion users are still bigger than any single country. Its platforms can help sway an election, fuel an insurrection or spark a genocide.
But if you look carefully, you can see chinks in the armor. Meta's earnings are starting to show the strain from years of growing consumer disaffection and reckless spending. The latest earnings, released on April 29, revealed a dip in user numbers for the first time since it started reporting these figures. And the slumping stock confirms what we have all known in our guts for a while: This is a company entering its zombie era.
This directive — first uncovered by Russian independent journalist Maria Kolomychenko, and reported by the Russian version of Radio Free Europe — [site in Russian -Ed] marks a major escalation in the Kremlin's long-running effort to control what its citizens see online and cut them off from the open internet.
The subsidy document allocates roughly 20 billion rubles annually for the operation of ASBI. This figure corroborates a September 2024 report that authorities intended to spend 60 billion rubles (around $650 million) over the next five years to update its internet-blocking system.
A critical detail is that the Russian government hasn’t defined what "92% effectiveness" actually means. Kolomychenko noted it could refer to the number of VPN applications removed from stores, the volume of traffic blocked, or the percentage of people unable to connect.
This marks a fundamental shift in how Russia governs the internet. Rather than chasing down individual services one by one, the state is now pouring money into the underlying network layer to build a permanent filter.
By placing these filters directly in the network path, Roskomnadzor aims to make bypassing blocks a constant uphill battle for users.
Since the invasion of Ukraine, censorship has expanded from specific news outlets to targeting major social media platforms and messaging tools.
Millions of websites have been blocked, and as of 2025, authorities have started cutting off mobile internet across entire regions. They’ve also officially blocked major platforms like WhatsApp and Telegram.
So far, more than 400 VPN services have been banned, with over 1,000 restricted, according to another Russian journalist, Aleksandar Djokic. This, even though it’s still legal to use a VPN in Russia.
Starting April 15, 2026, major Russian service providers are legally required to detect whether a user is connected via a VPN, raising concerns about data privacy and potential future profiling.
At the same time, the Ministry of Digital Development is also pushing a new "foreign traffic tax". It would charge mobile users 150 rubles per gigabyte for any data over a 15GB monthly limit. This fee, which has been facing technical delays, hits the international routes VPNs rely on, making it too costly for most people to bypass the blocks.
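The reported figures make the economics of the surcharge easy to check. A quick sketch, using only the 150 ruble/GB rate and 15 GB monthly allowance stated above (the helper function and its name are illustrative, not part of any official proposal text):

```python
def foreign_traffic_fee(total_gb: float,
                        free_gb: float = 15.0,
                        rate_rub_per_gb: float = 150.0) -> float:
    """Monthly surcharge under the proposed 'foreign traffic tax':
    every gigabyte beyond the free allowance is billed at a flat rate."""
    overage = max(0.0, total_gb - free_gb)
    return overage * rate_rub_per_gb

# A user routing 50 GB/month through foreign routes pays for 35 GB of overage:
print(foreign_traffic_fee(50))   # 5250.0 rubles
print(foreign_traffic_fee(10))   # 0.0 -- under the allowance, no fee
```

At 5,250 rubles for a fairly ordinary 50 GB month, the fee would dwarf typical Russian mobile plan prices, which is presumably the point.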
Since their introduction in the 1960s, lasers have fueled major advances in science and everyday technology, from supermarket scanners to eye surgery. Traditional lasers operate by controlling photons, which are particles of light. Over the past two decades, researchers have expanded this concept to other particles, including phonons, which represent tiny units of vibration or sound. Learning to control phonons could unlock new capabilities, including access to unusual quantum effects such as entanglement.
A team from the University of Rochester and Rochester Institute of Technology has developed a new squeezed phonon laser that can precisely control vibrations at the nanoscale. This level of control may help scientists better understand gravity, particle acceleration, and the principles of quantum physics. In their study published in Nature Communications, the researchers explain how they guided these small units of mechanical motion to behave in a coordinated, laser-like manner.
Nick Vamivakas, the Marie C. Wilson and Joseph C. Wilson Professor of Optical Physics with the URochester Institute of Optics, previously demonstrated a phonon laser in 2019. In that work, phonons were trapped and levitated using an optical tweezer inside a vacuum. However, turning this concept into a practical tool for precise measurement required addressing a major limitation shared by both photon and phonon lasers: noise. These unwanted fluctuations can interfere with signals and reduce measurement accuracy.
“While a laser looks to the naked eye like a steady beam, there’s actually a lot of fluctuation, which causes noise when you’re using lasers for measurement,” says Vamivakas. “By pushing and pulling on a phonon laser with light in the right way, we can reduce that phonon laser fluctuation significantly.”
The researchers tackled this challenge by using a method known as squeezing to lower the thermal noise within the phonon laser. Reducing this background disturbance makes it possible to take more precise measurements. According to Vamivakas, this improvement allows acceleration to be measured more accurately than with approaches that rely on photon lasers or radio frequency waves.
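For readers wanting the textbook picture behind "squeezing": it is a standard quantum-optics notion (the relations below are generic, not taken from the Nature Communications paper). Noise is redistributed between two conjugate quadratures so that the measured one drops below the symmetric vacuum level:

```latex
% With squeezing parameter r, the quadrature variances of a
% squeezed state obey
\Delta X_1^2 = \tfrac{1}{2} e^{-2r}, \qquad
\Delta X_2^2 = \tfrac{1}{2} e^{+2r},
% so the measured quadrature falls below the unsqueezed level
% \Delta X^2 = 1/2, while the uncertainty product
\Delta X_1 \, \Delta X_2 = \tfrac{1}{2}
% still saturates the Heisenberg bound.
```

The precision gain comes for free only in one quadrature; the conjugate one gets noisier, which is acceptable when the measurement cares about a single observable such as acceleration.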
With its enhanced sensitivity, the phonon laser could become a valuable tool for measuring gravity and other forces with high precision. This capability may support new navigation technologies. Scientists have proposed quantum compasses as highly accurate, “unjammable” alternatives to GPS navigation that do not depend on satellites. Vamivakas is interested in exploring whether phonon lasers could contribute to the development of such systems.
Journal Reference: Zhang, K., Xiao, K., Bhattacharya, M. et al. A two-mode thermomechanically squeezed phonon laser. Nat Commun 17, 2882 (2026). https://doi.org/10.1038/s41467-026-70564-3
The Trump administration is said to be discussing an executive order that would establish a government review process for new AI models before they're released to the public, The New York Times has reported, citing unnamed U.S. officials.
The proposed order would create an "AI working group" of tech executives and government officials to develop oversight procedures, with White House staff briefing leaders from Anthropic, Google, and OpenAI on the plans last week. These discussions, if true, would represent a sharp departure from the administration's current stance as something of a deregulatory champion — immediately upon taking office, the Trump administration revoked a Biden-era executive order addressing AI risks.
The sudden reversal coincides with a leadership vacuum in White House AI policy. David Sacks, who led the administration's deregulation push as AI czar, left the role in March, with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent having since taken a more active role in shaping AI policy, according to The New York Times.
The new approach sounds a lot like the UK's AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks before and after deployment. Officials told the New York Times that the NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review. Critically, the system would grant the government early access to models without blocking their release.
Perhaps unsurprisingly, the catalyst for all this appears to have been Anthropic’s Mythos model, which the company’s marketing described as capable of finding thousands of critical software vulnerabilities and too dangerous for public release.
That naturally attracted a lot of unwanted government attention at a time when the Trump administration is already locking horns with Anthropic over the collapsed $200 million Pentagon contract. The Pentagon designated Anthropic a supply chain risk after the company refused to remove guardrails on autonomous weapons and mass surveillance, though a federal judge later called that "Orwellian."
The NSA has already used Mythos to assess vulnerabilities in government Microsoft software deployments, even as other agencies remain cut off from Anthropic's tools. Some analysts have questioned whether Mythos's capabilities justify Anthropic's dramatic framing, with some studies finding that cheaper models can achieve comparable results in vulnerability discovery.
A White House official told The New York Times that talk of an executive order is "speculation," and that any announcement would come from Trump himself. Dean Ball, a former senior adviser on AI in the Trump administration, told the newspaper that officials are trying to avoid overregulation while keeping pace with the technology, calling it a “tricky balance.”
Daemon Tools users: It's time to check your machines for stealthy infections, stat:
Daemon Tools, a widely used app for mounting disk images, has been backdoored in a monthlong compromise that has pushed malicious updates from the servers of its developer, researchers said Tuesday.
Kaspersky, the security firm reporting the supply-chain attack, said it began on April 8 and remained active as of the time its post went live. Installers that are signed by the developer's official digital certificate and downloaded from its website infect Daemon Tools executables, causing the malware to run at boot time. Kaspersky didn't explicitly say so, but based on technical details, the infected versions appear to be only those that run on Windows. Versions 12.5.0.2421 through 12.5.0.2434 are affected. Neither Kaspersky nor developer AVB could immediately be reached for additional details.
Infected versions contain an initial payload that collects MAC addresses, hostnames, DNS domain names, running processes, installed software, and system locales. The malware sends this data to an attacker-controlled server. Thousands of machines in more than 100 countries were targeted. Of the many machines infected, about 12, belonging to retail, scientific, government, and manufacturing organizations, have received a follow-on payload, an indication that the supply-chain attack targets select groups.
[...] One of the follow-on payloads pushed to about a dozen organizations was what Kaspersky described as a "minimalistic backdoor." It has the ability to execute commands, download files, and run shellcode payloads in memory—making the infection harder to detect.
Kaspersky said that it observed a more complex backdoor dubbed QUIC RAT, installed on a single machine belonging to an educational institution located in Russia. Initial analysis found that it can inject payloads into the notepad.exe and conhost.exe processes and supports a variety of C2 communication protocols, including HTTP, UDP, TCP, WSS, QUIC, DNS, and HTTP/3.
The 100 infected organizations were primarily located in Russia, Brazil, Turkey, Spain, Germany, France, Italy, and China. Kaspersky's visibility into the attack is limited because it's based solely on telemetry provided by its own products.
[...] More recent supply-chain attacks have hit Trivy, Checkmarx, and Bitwarden and more than 150 packages available through open source repositories. Last year, there were at least six notable such attacks.
Anyone who uses Daemon Tools should take time to scan the entirety of their machines using reputable antivirus software. Windows users should additionally check for indicators of compromise listed in the Kaspersky post. For more technically advanced users, Kaspersky recommends monitoring "suspicious code injections into legitimate system processes, especially when the source is executables launched from publicly accessible directories such as Temp, AppData, or Public."
The disbelief was palpable when Mozilla's CTO last month declared that AI-assisted vulnerability detection meant "zero-days are numbered" and "defenders finally have a chance to win, decisively."
[...]
Mindful of the skepticism, Mozilla on Thursday provided a behind-the-scenes look into its use of Anthropic Mythos—an AI model for identifying software vulnerabilities—to ferret out 271 Firefox security flaws over two months. In a post, Mozilla engineers said the breakthrough they achieved, now finally ready for prime time, was primarily the result of two things: (1) improvement in the models themselves and (2) Mozilla's development of a custom "harness" that supported Mythos as it analyzed Firefox source code.
[...]
The biggest differentiating factor was the use of an agent harness, a piece of code that wraps around an LLM to guide it through a series of specific tasks. For such a harness to be useful, it requires significant resources to customize it to the project-specific semantics, tooling, and processes it will be used for. Grinstead described the harness his team built as "the code that drives the LLM in order to accomplish a goal. It gives the model instructions (e.g., 'find a bug in this file'), provides it tools (e.g., allowing it to read/write files and evaluate test cases), then runs it in a loop until completion."
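Grinstead's description maps onto a familiar pattern: instructions in, tool dispatch, loop until done. A minimal sketch of that pattern (every name here, including `call_llm` and the stub tools, is a hypothetical stand-in, not Mozilla's actual harness code):

```python
# Minimal agent-harness loop: give the model a goal and tools, then
# iterate until it signals completion or the step budget runs out.

def call_llm(messages):
    """Stand-in for a real LLM API call; returns a canned 'done' reply."""
    return {"action": "done", "summary": "no bug found"}

TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run_tests": lambda _: "all tests passed",
}

def run_harness(goal, max_steps=10):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["action"] == "done":     # model signals completion
            return reply["summary"]
        tool = TOOLS[reply["action"]]     # otherwise dispatch the named tool...
        result = tool(reply.get("argument"))
        messages.append({"role": "tool", "content": result})  # ...and loop
    return "step budget exhausted"

print(run_harness("find a bug in this file"))  # no bug found
```

The real engineering effort, per the post, lies not in this loop but in the project-specific tools and prompts wired into it.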
[...]
Thursday's behind-the-scenes view includes the unhiding of full Bugzilla reports for 12 of the 271 vulnerabilities Mozilla discovered using Mythos and, to a lesser extent, Claude Opus 4.6.
[...]
At least one researcher said Thursday that a cursory look at the reports showed they were "pretty impressive."
[...]
The critics are right to keep pushing back. Hype is a key method for inflating the already puffed-up valuations of AI companies. Given the extensive praise Mozilla has given to Mythos, it's easy for even more trusting people to wonder: What's it getting in return? Far from settling the debate, Thursday's elaborations are likely to only further stoke the controversy.
As Americans struggle with the price of gas and groceries, Starbucks CEO Brian Niccol made the case for a $9 cup of coffee while speaking with The Wall Street Journal's What's News AM podcast.
"We're doing really well with Gen Z and millennials, and then really had strong performance across all income cohorts," Niccol said. "It can start with as little as $3 for a traditional cup of coffee. And then obviously you can build your way into all sorts of customized drinks that people love that move that ticket up."
Podcast host Luke Vargas asked, "You mentioned sort of strength across income cohorts. We've heard so much this week about the K-shaped economy. Fortunes for some Americans, very different than for others. Is that not really something that's coming up in your sales?"
"You know, we're not seeing that in our business," Niccol said. "What we're seeing is people, you know, they want to have a special experience, and regardless of what your income level is. In some cases, you know, a $9 experience does feel like you're splurging. And then, what that means is we have to make it worthwhile, right?"
"And then in other cases, certain people believe, 'Well, this is a really affordable premium experience.' Because they're saying like, 'Well it's less than $10 and I get a really premium experience,'" Niccol said. "So, regardless of where you're stationed in those income cohorts, we want to make that experience worth your while. And what we know is what's definitely something that drives that value is to be able to have a great seat, have a great moment of connection with a barista."
"We just saw on Friday, I'm sure you've seen the US consumer confidence reading, perceptions of the economy are worse than they've been since the '70s, since '08, since the pandemic," Vargas said. "These are some pretty bad reference points here. Just how do you market to that consumer?"
"Yeah. Look, when we've spent the time talking to customers, 'What is it that you're looking for in your experience?' They do talk about how they use their Starbucks experience as a moment of escapism. And my hope is we get more than our fair share of all those occasions," replied Niccol.
"Part of that is you're not playing the value game," Vargas suggested.
"Well, I think we're just playing it in a different way, which is the way we're going to play the value game is you're going to feel like it was worth it," said Niccol. "And it's not going to be a game of discounting or one-off promotions. I think people actually really do appreciate knowing, 'Hey, if this is a $3 cup of coffee or a $5 latte, I know I'm going to get a great experience for that $5 experience, I'm in.'"
https://news.mit.edu/2026/astronomers-pin-down-origins-planetary-odd-couple-0505
Across the Milky Way galaxy, a planetary odd couple is circling a star some 190 light years from Earth. A normally "lonely" hot Jupiter is sharing space with a mini-Neptune, in a rare and unlikely pairing that's had astronomers puzzled since the system's discovery in 2020.
Now MIT scientists have caught a glimpse into the atmosphere of the mini-Neptune, which is circling inside the orbit of its Jupiter-sized companion, and discovered clues to explain the origins of this unusual planetary system.
In a study appearing today in Astrophysical Journal Letters, the scientists report on new measurements of the mini-Neptune's atmosphere, made using NASA's James Webb Space Telescope (JWST). It is the first time astronomers have measured the composition of a mini-Neptune that resides inside the orbit of a hot Jupiter.
Their measurements reveal that the smaller planet has a "heavy" atmosphere that is rich with water vapor, carbon dioxide, sulfur dioxide, and hints of methane. Such a heavy atmosphere would not have been acquired by the planet if it had formed in its current location, very close to its star.
Instead, the scientists say their findings point to an alternate origin story: Both the mini-Neptune and the hot Jupiter may have formed much farther away, in the colder region of the system's early disk of protoplanetary material. There, the planets could slowly build up atmospheres of ice and other volatiles. Over time, the planets were likely drawn in toward the star in a gradual process that kept them close, with their atmospheres intact.
The team's results are the first to show that mini-Neptunes can form beyond a star's "frost line." This boundary refers to the minimum distance from a star where the temperature is low enough that water condenses into ice.
"This is the first time we've observed the atmosphere of a planet that is inside the orbit of a hot Jupiter," says Saugata Barat, a postdoc in MIT's Kavli Institute for Astrophysics and Space Research and the lead author of the study. "This measurement tells us this mini-Neptune indeed formed beyond the frost line, giving confirmation that this formation channel does exist."
The team consists of astronomers around the world, including Andrew Vanderburg, a visiting assistant professor at MIT, and co-authors from multiple other institutions including the Harvard and Smithsonian Center for Astrophysics, the University of South Queensland, the University of Texas at Austin, and Lund University.
As their name implies, mini-Neptunes are planets that are less massive than Neptune. They are considered to be gas dwarfs, which are made mostly of gas, with an inner, rocky core. Mini-Neptunes are the most common type of planet found in the Milky Way, though, interestingly, no such world exists in our own solar system. Astronomers have observed many planets circling a wide variety of stars in a range of planetary systems. Mini-Neptunes, then, are generally considered to be garden-variety planets.
But in 2020, Chelsea X. Huang, then a Torres Postdoctoral fellow at MIT (now on the faculty at University of South Queensland), discovered a mini-Neptune in a rare and puzzling circumstance: The planet appeared to be circling its star with an unlikely companion — a hot Jupiter.
The astronomers made their discovery using NASA's Transiting Exoplanet Survey Satellite (TESS). They analyzed TESS' measurements of TOI-1130, a star located 190 light years from Earth, and detected signs of a mini-Neptune and a hot Jupiter, orbiting the star every four and eight days respectively.
"This was a one-of-a-kind system," says Huang. "Hot Jupiters are 'lonely,' meaning they don't have companion planets inside their orbits. They are so massive, and their gravity is so strong, that whatever is inside their orbit just gets scattered away. But somehow, with this hot Jupiter, an inner companion has survived. And that raises questions about how such a system could form."
The 2020 discovery of TOI-1130 and its odd planetary pair inspired Huang, Vanderburg, and their colleagues to take a closer look at the planets, and specifically, their atmospheres, with JWST. In its new study, the team reports its analysis of TOI-1130b — the inner-orbiting mini-Neptune.
Catching the planet at just the right time was their first challenge. Most planets circle their star with a regular, predictable period, like the tick of a clock. But the mini-Neptune and the hot Jupiter were found to be in "mean motion resonance," meaning that each can affect the other's motion, pulling and tugging, and slightly varying the time each takes to orbit their star. This made it tricky to predict when JWST could get a clear view.
The team, led by Judith Korth of Lund University, assembled as many past observations of the system as they could, and developed a model to predict when each planet would pass by the star at an angle that JWST could observe.
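The kind of model described above is usually a linear ephemeris plus a periodic timing-variation term capturing the resonant tug of the companion. A toy illustration of the idea (all numbers below are invented for the example; they are not the fitted TOI-1130 parameters):

```python
import math

def predicted_transit_time(epoch, t0, period, ttv_amp, ttv_period, phase=0.0):
    """Toy transit-time model: a linear ephemeris (t0 + n * P) plus a
    sinusoidal transit-timing-variation term, the usual first-order way
    to describe the pull of a resonant companion planet."""
    linear = t0 + epoch * period
    ttv = ttv_amp * math.sin(2 * math.pi * epoch * period / ttv_period + phase)
    return linear + ttv

# Hypothetical inner planet: 4.07-day period, 30-minute TTV amplitude
# (in days) on a ~900-day resonance super-period.
t = predicted_transit_time(epoch=100, t0=2458658.0, period=4.07,
                           ttv_amp=30 / 1440, ttv_period=900)
```

Even a half-hour wobble matters here: a JWST observation scheduled off a purely linear ephemeris could miss part of the transit, which is why the team needed every archival data point it could find.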
"It was a challenging prediction, and we had to be spot-on," Barat says.
In the end, the team was able to catch a direct and detailed snapshot of both planets.
"The beauty of JWST is that it does not observe just in one color, but at different colors, or wavelengths," Barat explains. "And the specific wavelengths that a planet absorbs can tell you a lot about the composition of its atmosphere."
From JWST's measurements, the team found that the planet absorbed wavelengths characteristic of water, carbon dioxide, sulfur dioxide, and, to a lesser degree, methane. These molecules are heavier than hydrogen and helium, which constitute lighter atmospheres. Astronomers had assumed that, if mini-Neptunes formed very close to their star, they should have light atmospheres.
But the team's new results counter that assumption and offer a new way that mini-Neptunes could form. Since heavier molecules were found in the atmosphere of TOI-1130b, which resides very close to its star, the scientists say the only possible explanation for its composition is that the planet formed much farther out than its current location.
The planet likely accumulated its heavy atmosphere of water and other volatiles such as carbon dioxide and sulfur dioxide in the icy region beyond the star's frost line. In this much colder environment, water condenses onto bits of dust to form icy pebbles, which an infant planet can draw into its atmosphere. The ice then evaporates as the planet slowly migrates in closer to its star.
Barat says the team's detection of heavy molecules in the atmosphere of TOI-1130b confirms that the planet — and likely its hot Jupiter companion — formed in the outskirts of the system. Through gradual migration, the two planets would be able to stay close together and keep their atmospheres intact.
"This system represents one of the rarest architectures that astronomers have ever found," Barat says. "The observations of TOI-1130b provide the first hint that such mini-Neptunes that form beyond the water/ice line are indeed present in nature."
Nissan has reversed course on plans to build electric vehicles at its Mississippi assembly plant and will instead equip the factory to produce a range of body-on-frame trucks and SUVs, a shift that changes the company's manufacturing footprint and signals a renewed focus on larger, conventional vehicles. The decision affects local supply chains, workforce planning, and Nissan's place in the broader auto market as demand patterns evolve:
The move away from EV production at the Mississippi site is a clear operational pivot for Nissan, trading a battery-driven future for heavier, body-on-frame vehicles built for hauling and towing. That choice reflects a reassessment of where the company sees near-term returns and where it wants to allocate manufacturing capacity. It is notable because it alters expectations that U.S. plants would be central to Nissan's electric vehicle rollout.
From a market perspective, trucks and large SUVs remain strong sellers in the United States, and automakers chase profitable segments when margins are tight. Body-on-frame designs are traditionally preferred for towing and rugged use, and customers who prioritize those capabilities have kept demand elevated. Nissan's decision seems tied to serving established buyer preferences rather than betting exclusively on a rapid surge in EV adoption.
[...] Local employment effects will be significant but mixed, and the specifics will depend on how Nissan structures the transition and retraining programs. Body-on-frame production can support a wide range of skilled positions, but the mix of jobs differs from an EV-focused plant where battery technicians and electrical specialists are more in demand.
[...] There are also ripple effects for suppliers and battery ecosystem plans that may have counted on Nissan's EV commitment at that location. Battery cell makers, electric motor suppliers, and companies building charging infrastructure could see fewer business opportunities tied to this plant. At the same time, chassis, frame, and drivetrain suppliers that serve conventional trucks may find fresh demand.
[...] Regulatory and incentive environments sometimes nudge automakers toward or away from certain investments, but manufacturers also follow clear market signals. Incentives, fuel-economy rules, and consumer tax credits all play roles in decision making, yet companies still prioritize segments with stable, profitable demand. Nissan's choice suggests a pragmatic approach to those competing pressures.
Previously:
Research scientist and avid skier Erik Johannes Husom has built his own pair of wooden skis from scratch and documented the process in images. That includes felling the tree and splitting it. He has a short video demonstrating the effectiveness of his new skis in actual use.
The main stages of the process included felling the tree, debarking it, cutting it down to a suitable length, and splitting it into two halves. This was followed by shaping the wood, first with an axe and then with hand planes. The final polish was done using a knife and then sandpaper.
The most challenging part came afterwards: Steam bending the wood to give the skis a raised tip (a "shovel"). I had trouble getting the wood soft enough to get a proper bend on it, and ended up using a combination of boiling and steaming to achieve this. The result was less than optimal, but I learnt some lessons on how to achieve a proper bend for the next pair of skis.
I'll add that split wood is quite flexible and far stronger than sawed wood, so by splitting one can get greater strength with much less weight. Splitting was integral to how Viking ships were made strong enough to be seaworthy on the open ocean, yet light enough for extended river ventures and even occasional portages.
Previously:
(2018) Attention Backcountry Skiers: Scientists Want Your Help - SoylentNews
Investigators spent weeks unravelling enthusiast's bedroom project
A university student in Taiwan is out on bail after being accused of interfering with signals sent to the country's high-speed rail network, bringing trains to a halt.
In a statement issued to local media, Taiwan High Speed Rail (THSR) confirmed that a 48-minute disruption to three trains on April 5 was caused by a rogue General Alarm signal of the kind triggered by specialized equipment used by train station staff.
This General Alarm signal was sent via a terrestrial trunked radio (TETRA) handset at Taichung Station. Staff followed protocol and engaged their emergency response plans, which in turn required the trains to be stopped manually.
These devices are not the kind commuters usually carry around. Functioning much like walkie-talkies, they are typically used by station staff to communicate with one another and train drivers.
Given that these devices are most commonly used by station staff, officials initially assumed it was an inside job.
Rail police and telecommunications investigators probed the case after station control room staff ruled out the possibility of official equipment having been stolen or misused by staff.
Taiwan's Major Criminal Cases Unit joined the investigation on April 13 after chief prosecutor Chang Chun-hui deemed the case a threat to transportation safety.
Over the course of two weeks, detectives concluded that the attack was likely carried out by the 23-year-old student, identified only by his surname Lin, who was described as a radio enthusiast.
According to statements reported by UDN, officials believe Lin exploited a vulnerability in the TETRA communication network and remotely triggered the General Alarm signal using unspecified electromagnetic equipment, which police said they found while searching his residence and workplace.
Following the raids, police seized seven radio devices, a laptop, two smartphones, and what appeared to be a software-defined radio (SDR) filter.
Officials said they believe the way Lin allegedly triggered the General Alarm was rudimentary and involved cloning signals using equipment he bought online.
Lin allegedly connected a radio to his laptop via an SDR filter, which captured the radio signal used by THSR, and configured his radio device to transmit the same signal, allowing him to trigger the General Alarm in a way that appeared to come from a station employee.
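The capture-and-retransmit technique described above is a classic replay attack: if a receiver acts on any burst that looks like the expected signal, it cannot distinguish a legitimate transmission from a recorded copy. The toy simulation below sketches that idea under stated assumptions; the bit pattern, modulation, and correlator receiver are all invented for illustration and bear no relation to TETRA's actual framing or security features.

```python
# Toy replay-attack simulation: a receiver that matches on waveform shape
# alone accepts a recorded copy of a burst just as readily as the original.
# Illustrative only -- the "alarm" bit pattern and the correlator receiver
# are invented for this example, not real TETRA behavior.
import numpy as np

rng = np.random.default_rng(0)

def make_alarm_burst(bits, samples_per_bit=8):
    """BPSK-modulate a fixed bit pattern (a stand-in for the alarm burst)."""
    symbols = np.repeat(2 * np.array(bits) - 1, samples_per_bit)
    return symbols.astype(float)

def receiver_accepts(samples, reference, threshold=0.9):
    """Accept any burst whose normalized correlation with the known
    pattern exceeds the threshold -- note: nothing authenticates the sender."""
    corr = np.dot(samples, reference) / (
        np.linalg.norm(samples) * np.linalg.norm(reference))
    return corr > threshold

alarm_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # invented alarm code
reference = make_alarm_burst(alarm_bits)

# A legitimate handset transmits the burst (with a little channel noise).
legit = reference + 0.05 * rng.standard_normal(reference.size)

# An attacker records the transmission off the air...
recording = legit.copy()
# ...and simply replays the captured samples later.
replayed = recording

print(receiver_accepts(legit, reference))     # True
print(receiver_accepts(replayed, reference))  # True: replay is indistinguishable
```

The point of the sketch is that signal-level defenses alone cannot stop this: only cryptographic authentication of each message (timestamps, nonces, signed payloads) lets a receiver reject a bit-perfect copy.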
Police arrested Lin on April 28 and, after questioning on April 29, concluded he was most likely behind the disruption. They released Lin on bail set at NT$100,000 ($3,183), convinced there was no need for further detention.
Nvidia is also rumored to be resurrecting an old GPU to cope with VRAM supply woes:
We've had another warning from a major memory chipmaker that the RAM crisis will only worsen, and rumors continue to circulate that Nvidia could bring back an old GPU – from two generations ago – to help deal with video RAM woes.
Wccftech reports that Micron just posted record Q2 revenue, fuelled by AI demand, and the company's CEO, Sanjay Mehrotra, observed that this demand isn't going away – and in fact will only get stronger.
In an interview with CNBC [video not reviewed], Mehrotra said: "AI is in very early innings; you just saw at GTC how much advances are being made in AI. And memory is a strategic asset; you need more memory, you need faster performance memory in order for AI to be able to deliver its full capabilities."
[...] Meanwhile, as VideoCardz recently pointed out, there are continued rumors that the RTX 3060 is going to be resurrected in its 12GB incarnation. This is according to the Board Channels [source in Chinese], a source of supply chain rumors over in China, and we're told production of the RTX 3060 could be fired up in June.
[...] It's very much a reflection of the situation with video RAM, and while 12GB is a considerable loadout for a budget graphics card, it's GDDR6 memory rather than the current generation, which uses GDDR7. Therefore, it won't interfere with the inventory of the latter.
[...] What's also worrying is that Micron isn't saying this in isolation. In fact, both the other major players in terms of RAM manufacturers, Samsung and SK Hynix, have issued similar (or more dire) warnings of their own.
Samsung recently said that it expects "significant shortages" across its memory products to last through to 2028 (at least), and SK Hynix previously warned that we could be dealing with the fallout from the RAM crisis until as late as 2030.
With all three memory-making giants issuing these kinds of ominous statements, and the likes of Nvidia rumored to be resurrecting old GPUs to get around video RAM supply constraints, the prospect of the RAM crisis easing off any time soon doesn't seem likely.